
[GPU] Expand matmul decomp cases #2916

Open

kealan-barbieri wants to merge 4 commits into main from kealanba/gemm_fpmath_fix
Conversation

kealan-barbieri
Contributor

@kealan-barbieri kealan-barbieri commented Mar 18, 2025

Description

Expands the cases where inputs are decompressed to cover integer activations as well as integer weights. Integer weights are still required, but when an fpmath setting is present, integer activations will now be upconverted as well.

This means that cases intending to use integer accumulation must not supply, e.g., attr-fpmath=f16:true, since all such cases must be upconverted per https://jira.devtools.intel.com/browse/MFDNN-13380.

This aligns the behavior with the documentation for applying fpmath to integer inputs: if integer fpmath is true, then upconversion must occur, which changes strategy selection. Strategies that perform upconversion are relabeled to make this explicit, and strategy selection is updated to distinguish the ordering between different "compound" types, e.g. "[OH]" vs. "[OB]".
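The decision described above (integer weights required; the fpmath attribute's apply-to-int flag forcing upconversion of integer inputs; integer accumulation only when no such fpmath setting is supplied) can be modeled as a small decision function. This is a hypothetical Python sketch of the selection logic, not the actual oneDNN implementation; the function and strategy names are invented for illustration.

```python
def select_strategy(wei_is_int: bool, src_is_int: bool,
                    fpmath_type: str, apply_to_int: bool) -> str:
    """Hypothetical model of the matmul decompression decision in this PR.

    fpmath_type is the requested fpmath mode (e.g. "strict", "f16", "bf16");
    apply_to_int models the boolean suffix in attr-fpmath=f16:true.
    """
    if not wei_is_int:
        # Integer weights are still a precondition for decompression.
        return "no_decompression"
    if fpmath_type != "strict" and apply_to_int:
        # fpmath applies to integer inputs: weights, and integer
        # activations if present, are upconverted to the fpmath type.
        return "upconvert_inputs"
    if src_is_int:
        # No int-applying fpmath setting: integer accumulation stays legal.
        return "integer_accumulation"
    return "weights_only_decompression"
```

For example, with int8 activations and weights, requesting attr-fpmath=f16:true selects the upconverting path, while omitting the fpmath attribute keeps integer accumulation.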

Fixes # (github issue)

Checklist

General

  • Do all unit and benchdnn tests (make test and make test_benchdnn_*) pass locally for each commit?
  • Have you formatted the code using clang-format?

@kealan-barbieri kealan-barbieri requested a review from a team as a code owner March 18, 2025 22:34
@github-actions github-actions bot added the platform:gpu-intel Codeowner: @oneapi-src/onednn-gpu-intel label Mar 18, 2025
@kealan-barbieri kealan-barbieri force-pushed the kealanba/gemm_fpmath_fix branch from 2f3e014 to e3a9503 Compare March 19, 2025 00:29
@kealan-barbieri kealan-barbieri requested review from a team as code owners March 19, 2025 00:29
@github-actions github-actions bot added platform:cpu-x64 Intel64/AMD64 processors. Codeowner: @oneapi-src/onednn-cpu-x64 component:tests Codeowner: @oneapi-src/onednn-arch labels Mar 19, 2025
@kealan-barbieri kealan-barbieri changed the base branch from main to dzarukin/allow_int4_odd March 19, 2025 00:29
@dzarukin dzarukin force-pushed the dzarukin/allow_int4_odd branch 3 times, most recently from b34fab9 to 2e2c013 Compare March 19, 2025 19:43
@dzarukin dzarukin force-pushed the dzarukin/allow_int4_odd branch 4 times, most recently from f4d86c8 to 8e04316 Compare March 20, 2025 04:43
@kealan-barbieri kealan-barbieri force-pushed the kealanba/gemm_fpmath_fix branch from e3a9503 to 4375e7a Compare March 21, 2025 16:47
@github-actions github-actions bot removed platform:cpu-x64 Intel64/AMD64 processors. Codeowner: @oneapi-src/onednn-cpu-x64 component:tests Codeowner: @oneapi-src/onednn-arch labels Mar 21, 2025
@kealan-barbieri kealan-barbieri force-pushed the kealanba/gemm_fpmath_fix branch from 4375e7a to 6892eb5 Compare March 21, 2025 17:10
@dzarukin dzarukin force-pushed the dzarukin/allow_int4_odd branch from 8e04316 to 608baa3 Compare March 21, 2025 18:27
@kealan-barbieri kealan-barbieri force-pushed the kealanba/gemm_fpmath_fix branch from 6892eb5 to ebf1fa9 Compare March 24, 2025 16:45
@vpirogov vpirogov force-pushed the dzarukin/allow_int4_odd branch from 608baa3 to f933c1e Compare March 24, 2025 17:50
Base automatically changed from dzarukin/allow_int4_odd to main March 24, 2025 19:24
@kealan-barbieri
Contributor Author

make test
disable device_cpu
enable device_gpu
disable benchdnn_all
enable benchdnn_matmul

@kealan-barbieri kealan-barbieri changed the title [GPU] Expand matmul decomp cases [WIP][GPU] Expand matmul decomp cases Mar 26, 2025
@kealan-barbieri kealan-barbieri force-pushed the kealanba/gemm_fpmath_fix branch from ebf1fa9 to d7701d3 Compare March 27, 2025 17:27
@kealan-barbieri
Contributor Author

make test
disable device_cpu
enable device_gpu
disable benchdnn_all
enable benchdnn_matmul

@kealan-barbieri kealan-barbieri changed the title [WIP][GPU] Expand matmul decomp cases [GPU] Expand matmul decomp cases Mar 27, 2025
@kealan-barbieri kealan-barbieri force-pushed the kealanba/gemm_fpmath_fix branch from d7701d3 to 7ea8b1d Compare March 27, 2025 23:52
@kealan-barbieri
Contributor Author

make test
disable device_cpu
enable device_gpu
disable benchdnn_all
enable benchdnn_matmul
enable benchdnn_ip

@kealan-barbieri kealan-barbieri force-pushed the kealanba/gemm_fpmath_fix branch from 7ea8b1d to 3cce26c Compare April 1, 2025 23:26
@kealan-barbieri
Contributor Author

make test
disable device_cpu
enable device_gpu
disable benchdnn_all
enable benchdnn_matmul
enable benchdnn_ip

4 participants